To date, communication systems have been designed primarily to reliably communicate sequences of bits. This approach yields efficient engineering designs that are agnostic to the meaning of the messages or to the goal that the message exchange aims to achieve. Next-generation systems, however, can be enriched by folding message semantics and the goals of communication into their design. Furthermore, these systems can be made cognizant of the context in which communication takes place, providing avenues for novel design insights. This tutorial summarizes the efforts to date, starting from its early adaptations, through semantic-aware and task-oriented communications, covering the foundations, algorithms, and potential implementations. The focus is placed on approaches that utilize information theory to provide the foundations, as well as on the significant role of learning in semantic and task-aware communications.
Computer vision and machine learning are playing an increasingly important role in computer-assisted diagnosis; however, the application of deep learning to medical imaging has challenges in data availability and data imbalance, and it is especially important that models for medical imaging are built to be trustworthy. Therefore, we propose TRUDLMIA, a trustworthy deep learning framework for medical image analysis, which adopts a modular design, leverages self-supervised pre-training, and utilizes a novel surrogate loss function. Experimental evaluations indicate that models generated from the framework are both trustworthy and high-performing. It is anticipated that the framework will support researchers and clinicians in advancing the use of deep learning for dealing with public health crises including COVID-19.
Effective analysis of unusual, domain-specific video collections is an important practical problem where state-of-the-art general-purpose models still face limitations. Hence, it is desirable to design benchmark datasets that challenge novel, powerful models for specific domains with additional constraints. It is important to remember that domain-specific data may be noisier (e.g., endoscopic or underwater videos) and often require more experienced users for effective search. In this paper, we focus on single-shot videos taken from moving cameras in underwater environments, which constitute a nontrivial challenge for research purposes. The first shard of the new Marine Video Kit dataset is presented to serve for video retrieval and other computer vision challenges. In addition to basic meta-data statistics, we present several insights and reference graphs based on low-level features as well as semantic annotations of selected keyframes. The analysis also contains experiments showing the limitations of respected general-purpose models for retrieval.
Building AI models with trustworthiness is important, especially in regulated domains such as healthcare. In tackling COVID-19, previous work has used convolutional neural networks as the backbone architecture, which are prone to overfitting and to making overconfident decisions, rendering them less trustworthy, a critical flaw in the context of medical imaging. In this study, we propose a feature-learning approach using Vision Transformers, which employ an attention-based mechanism, and examine the representation power of Transformers as a new backbone architecture for medical imaging. Through the task of classifying COVID-19 chest radiographs, we investigate whether the generalization capability benefits solely from the architectural advances of Vision Transformers. The trustworthiness of the models is assessed quantitatively and qualitatively through the use of "Trust Score" computation and a visual explainability technique. We conclude that the attention-based feature-learning approach is promising for building trustworthy deep learning models for healthcare.
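As a concrete reference for the "Trust Score" mentioned above, the sketch below follows one common formulation: the ratio of a test sample's distance to the nearest class other than the predicted one over its distance to the predicted class, computed in a feature space. The function name, the inputs, and the plain nearest-neighbor distances are illustrative assumptions, not the exact procedure used in the study.

```python
import numpy as np

def trust_score(features, labels, query_feat, pred_label):
    """Ratio of the distance to the nearest class other than the predicted one
    over the distance to the predicted class; higher suggests a more trustworthy
    prediction. (Minimal sketch; k-NN filtering and density estimation are omitted.)"""
    dists = {c: np.min(np.linalg.norm(features[labels == c] - query_feat, axis=1))
             for c in np.unique(labels)}
    d_pred = dists[pred_label]
    d_other = min(d for c, d in dists.items() if c != pred_label)
    return d_other / (d_pred + 1e-12)

# Hypothetical usage on penultimate-layer features of a chest X-ray classifier:
feats = np.random.randn(100, 64)           # training-set features
labs = np.random.randint(0, 3, size=100)   # training-set labels (3 classes)
score = trust_score(feats, labs, np.random.randn(64), pred_label=1)
```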
Federated learning has been proposed as a privacy-preserving machine learning framework that enables multiple clients to collaborate without sharing raw data. However, client privacy protection is not guaranteed by design in this framework. Prior work has shown that the gradient-sharing strategies in federated learning can be vulnerable to data reconstruction attacks. In practice, however, clients may not transmit raw gradients, given the high communication cost or due to enhanced privacy requirements. Empirical studies have demonstrated that gradient obfuscation, including intentional obfuscation via gradient noise injection and unintentional obfuscation via gradient compression, can provide more privacy protection against reconstruction attacks. In this work, we present a new data reconstruction attack framework targeting the image classification task in federated learning. We show that commonly adopted gradient postprocessing procedures, such as gradient quantization, gradient sparsification, and gradient perturbation, may give a false sense of security in federated learning. Contrary to prior studies, we argue that privacy enhancement should not be treated as a byproduct of gradient compression. In addition, we design a new method under the proposed framework to reconstruct images at the semantic level. We quantify the semantic privacy leakage and compare it against measures based on image similarity scores. Our comparisons challenge the image data leakage evaluation schemes in the literature. The results emphasize the importance of revisiting and redesigning the privacy protection mechanisms for client data in existing federated learning algorithms.
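For readers unfamiliar with the gradient postprocessing steps the attack targets, the minimal sketch below illustrates what quantization, sparsification, and noise perturbation of a client update might look like. The parameter choices and function names are assumptions made for illustration, not the paper's exact settings.

```python
import numpy as np

def quantize(grad, num_bits=8):
    """Uniform quantization of a gradient vector to a fixed number of bits."""
    scale = np.max(np.abs(grad)) + 1e-12
    levels = 2 ** (num_bits - 1) - 1
    return np.round(grad / scale * levels) / levels * scale

def sparsify(grad, keep_ratio=0.1):
    """Top-k sparsification: keep only the largest-magnitude entries."""
    k = max(1, int(keep_ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out = np.zeros_like(grad)
    out[idx] = grad[idx]
    return out

def perturb(grad, noise_std=1e-3):
    """Gradient perturbation via additive Gaussian noise."""
    return grad + np.random.normal(0.0, noise_std, size=grad.shape)

# A client might apply one or more of these before uploading its update:
g = np.random.randn(1000)
obfuscated = perturb(sparsify(quantize(g)))
```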
Federated learning (FL) is a privacy-preserving paradigm in which multiple participants jointly solve a machine learning problem without sharing raw data. Unlike traditional distributed learning, a unique characteristic of FL is statistical heterogeneity, namely, the data distributions across participants differ from one another. Meanwhile, recent advances in the interpretation of neural networks have made wide use of the neural tangent kernel (NTK) for convergence analysis. In this paper, we propose a novel FL paradigm empowered by the NTK framework. The paradigm addresses the challenge of statistical heterogeneity by transmitting update data that are more expressive than those of the conventional FL paradigm. Specifically, sample-wise Jacobian matrices, rather than model weights/gradients, are uploaded by the participants. The server then constructs an empirical kernel matrix to update the global model without explicitly performing gradient descent. We further develop a variant with improved communication efficiency and enhanced privacy. Numerical results show that the proposed paradigm can achieve the same accuracy as federated averaging while reducing the number of communication rounds by an order of magnitude.
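A rough sketch of how such a server-side update could look is given below, assuming each client uploads a per-sample Jacobian, its current predictions, and its labels. The kernel-gradient-flow step and the pseudo-inverse mapping back to weight space are simplifying assumptions made here for illustration and need not match the paper's actual procedure.

```python
import numpy as np

def client_payload(jacobian, preds, labels):
    """Hypothetical client upload: per-sample Jacobian J_i (n_i x p),
    current predictions, and labels, as described in the abstract."""
    return jacobian, preds, labels

def server_update(weights, payloads, lr=0.1, steps=100):
    """Server stacks the Jacobians, builds the empirical NTK matrix K = J J^T,
    evolves the residuals under kernel gradient flow, and maps the result back
    to a weight update, without per-round gradient descent on the clients."""
    J = np.vstack([p[0] for p in payloads])        # (N, p) stacked Jacobians
    f = np.concatenate([p[1] for p in payloads])   # (N,) current predictions
    y = np.concatenate([p[2] for p in payloads])   # (N,) labels
    K = J @ J.T                                    # empirical NTK matrix, (N, N)
    residual = f - y
    for _ in range(steps):                         # discretized kernel gradient flow
        residual = residual - lr * K @ residual / len(residual)
    delta_f = residual - (f - y)                   # desired change in predictions
    return weights + np.linalg.pinv(J) @ delta_f   # linearized map back to weight space

# Hypothetical round with two clients and a p-dimensional linear model:
p = 8
w = np.zeros(p)
payloads = []
for n_i in (5, 7):
    J_i = np.random.randn(n_i, p)
    y_i = np.random.randn(n_i)
    payloads.append(client_payload(J_i, J_i @ w, y_i))
w = server_update(w, payloads)
```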
Federated learning allows collaborative workers to solve a machine learning problem while preserving data privacy. Recent studies have tackled various challenges in federated learning, but the joint optimization of communication overhead, learning reliability, and deployment efficiency is still an open problem. To this end, we propose a new scheme named federated learning via plurality vote (FedVote). In each communication round of FedVote, workers transmit binary or ternary weights to the server with low communication overhead. The model parameters are aggregated via weighted voting to enhance the resilience against Byzantine attacks. When deployed for inference, the model with binary or ternary weights is resource-friendly to edge devices. We show that our proposed method can reduce quantization error and converges faster compared with the methods directly quantizing the model updates.
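To make the voting idea concrete, the following sketch shows a ternary quantizer on the client side and an optionally reputation-weighted sign vote on the server side. The threshold rule, the reputation weights, and the tie handling are assumptions for illustration rather than FedVote's exact specification.

```python
import numpy as np

def ternarize(weights):
    """Client side: map real-valued weights to {-1, 0, +1} before uploading
    (simple threshold rule; the paper's actual quantizer may differ)."""
    thresh = 0.5 * np.mean(np.abs(weights))
    q = np.zeros_like(weights, dtype=np.int8)
    q[weights > thresh] = 1
    q[weights < -thresh] = -1
    return q

def plurality_vote(client_votes, reputations=None):
    """Server side: aggregate ternary votes by weighted voting, which bounds
    the influence of any single Byzantine client on each weight."""
    votes = np.stack(client_votes).astype(np.float64)   # (num_clients, num_params)
    if reputations is None:
        reputations = np.ones(votes.shape[0])
    tally = reputations @ votes                          # weighted vote count per weight
    return np.sign(tally).astype(np.int8)                # majority sign wins; ties -> 0

# Example: 5 clients vote on 4 weights; the last client flips the first client's votes.
clients = [ternarize(np.random.randn(4)) for _ in range(4)]
clients.append(-clients[0])
global_ternary = plurality_vote(clients)
```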
Federated learning enables remote workers to collaboratively train a shared machine learning model while keeping the training data local. In the use case of wireless mobile devices, the communication overhead is a critical bottleneck due to limited power and bandwidth. Prior work has exploited various data compression tools, such as quantization and sparsification, to reduce the overhead. In this paper, we propose a predictive-coding-based compression scheme for federated learning. The scheme has shared prediction functions across all devices and allows each worker to transmit a compressed residual vector derived from the reference. In each communication round, we select the predictor and quantizer based on the rate-distortion cost, and further reduce the redundancy with entropy coding. Extensive simulations show that the communication cost can be reduced by up to 99%, with even better learning performance compared with other baseline methods.
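The sketch below illustrates the general predictive-coding idea described above: a predictor shared by all devices, a residual against that prediction, a quantizer chosen by a rate-distortion cost, and a placeholder for entropy coding. The linear-extrapolation predictor, the toy rate proxy, and the candidate step sizes are illustrative assumptions, not the scheme's actual components.

```python
import numpy as np

def predict(history):
    """Shared predictor known to every device: here, a simple linear
    extrapolation from the last two global updates (an assumption)."""
    if len(history) < 2:
        return history[-1] if history else 0.0
    return 2.0 * history[-1] - history[-2]

def quantize(residual, step):
    """Uniform quantizer applied to the prediction residual."""
    return np.round(residual / step) * step

def rd_cost(residual, step, lam=0.01):
    """Toy rate-distortion cost: distortion plus lambda times a crude rate proxy."""
    q = quantize(residual, step)
    distortion = np.mean((residual - q) ** 2)
    rate = np.mean(np.log2(1.0 + np.abs(q) / step))
    return distortion + lam * rate

def encode_update(update, history, steps=(0.001, 0.01, 0.1)):
    """Each worker sends only the quantized residual w.r.t. the shared prediction,
    choosing the quantizer step with the lowest rate-distortion cost; the residual
    would then be entropy coded before transmission."""
    reference = predict(history)
    residual = update - reference
    best_step = min(steps, key=lambda s: rd_cost(residual, s))
    return quantize(residual, best_step), best_step

# Hypothetical worker-side call in one round:
update = np.random.randn(1000) * 0.01
history = [np.random.randn(1000) * 0.01 for _ in range(2)]
residual_q, step = encode_update(update, history)
```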
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
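As a hedged illustration of the feature-selection step mentioned above (keeping the 20 user property features with the greatest information gain), one could rank features by estimated mutual information with the label; the use of scikit-learn's mutual_info_classif here is an assumed stand-in, not necessarily the estimator used to build MGTAB.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def top_k_by_information_gain(X, y, k=20):
    """Rank features by estimated mutual information with the label and keep
    the top k. (Stand-in estimator; the benchmark's exact computation may differ.)"""
    scores = mutual_info_classif(X, y, random_state=0)
    top_idx = np.argsort(scores)[::-1][:k]
    return top_idx, X[:, top_idx]

# Hypothetical usage with a user-property feature matrix and bot/human labels:
X = np.random.rand(500, 40)
y = np.random.randint(0, 2, size=500)
idx, X_sel = top_k_by_information_gain(X, y, k=20)
```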
The Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement gained from ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the capacity of the Transformer given limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is required to identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. In this way, the Transformer is driven to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of the self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of BOLT for medical image classification, compared with ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
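The training step below is a loose, hedged sketch of a BYOL-style consistency objective plus an auxiliary difficulty-ranking task, following the abstract's description. The perturbation function, the loss weighting, the ranking-head interface, and the momentum coefficient are all assumptions for illustration and do not reproduce BOLT's actual implementation.

```python
import torch
import torch.nn.functional as F

def bolt_step(online, target, predictor, rank_head, tokens, perturb, opt, tau=0.99):
    """One sketch of a training step: `online` and `target` are Transformer encoders,
    `predictor` maps online features to the target space, `rank_head` classifies which
    view is harder, and `perturb(tokens, strength)` is an assumed perturbation function."""
    easy, hard = perturb(tokens, 0.1), perturb(tokens, 0.5)    # two perturbation strengths
    z_online = predictor(online(hard))                          # online branch sees the harder view
    with torch.no_grad():
        z_target = target(easy)                                 # target branch sees the easier view
    consistency = 1 - F.cosine_similarity(z_online, z_target, dim=-1).mean()

    # auxiliary task: predict which view contains the more difficult perturbed tokens
    logits = rank_head(torch.cat([online(easy), online(hard)], dim=0))
    labels = torch.cat([torch.zeros(len(tokens)), torch.ones(len(tokens))]).long()
    ranking = F.cross_entropy(logits, labels.to(logits.device))

    loss = consistency + 0.1 * ranking
    opt.zero_grad()
    loss.backward()
    opt.step()

    # momentum (EMA) update of the target branch, as in BYOL-style methods
    with torch.no_grad():
        for p_t, p_o in zip(target.parameters(), online.parameters()):
            p_t.mul_(tau).add_(p_o, alpha=1 - tau)
    return loss.item()
```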